- massively parallel computing
- Programming: computations with massive parallelization, massively parallel computations
Universal English-Russian Dictionary. Академик.ру. 2011.
Massively parallel — a description that appears in computer science, life sciences, medical diagnostics, and other fields. A massively parallel computer is a distributed-memory computer system which consists of many individual nodes, each of which is essentially… … Wikipedia
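The distributed-memory, many-node architecture described in that entry is commonly programmed with message-passing libraries such as MPI: every node runs its own copy of the program, keeps its own memory, and exchanges data only through explicit messages. A minimal sketch, assuming MPI is available and using an arbitrary partial-sum workload purely for illustration:

```c
/* Minimal distributed-memory sketch: each MPI rank (node) works on its own
 * slice of the problem and results are combined with a collective message.
 * The summation workload is an arbitrary illustrative assumption.
 * Build with mpicc, run with e.g. `mpirun -np 4 ./a.out`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this node's identity */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many nodes are running */

    /* Each node sums a disjoint subset of 0..999 from its own memory. */
    long local = 0;
    for (long i = rank; i < 1000; i += size)
        local += i;

    /* Partial results travel over the interconnect and are added on rank 0. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %ld, computed by %d nodes\n", total, size);

    MPI_Finalize();
    return 0;
}
```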
Parallel computing — a form of computation in which many calculations are carried out simultaneously; related programming paradigms include agent-oriented, automata-based, component-based, flow-based, pipelined, concatenative, and concurrent computing … Wikipedia
Massively parallel processor array — A Massively Parallel Processor Array (MPPA) is a type of integrated circuit which has a massively parallel array of hundreds or thousands of CPUs and RAM memories. These processors pass work to one another through a reconfigurable interconnect of … Wikipedia
Massively distributed collaboration — The term massively distributed collaboration was coined by Mitchell Kapor, in a presentation at UC Berkeley on 2005-11-09, to describe an emerging activity of wikis and electronic mailing lists and blogs and other content-creating virtual… … Wikipedia
Optical computing — Today's computers use the movement of electrons in and out of transistors to do logic. Optical or Photonic computing is intended to use photons or light particles, produced by lasers or diodes, in place of electrons. Compared to electrons,… … Wikipedia
Massive parallel processing — (MPP) is a term used in computer architecture to refer to a computer system with many independent arithmetic units or entire microprocessors that run in parallel. The term massive connotes hundreds if not thousands of such units. Early examples… … Wikipedia
Distributed computing — is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal … Wikipedia
Data Intensive Computing — is a class of parallel computing applications which use a data-parallel approach to processing large volumes of data, typically terabytes or petabytes in size, and typically referred to as Big Data. Computing applications which devote most of their… … Wikipedia
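The data-parallel approach mentioned in that entry means applying the same operation independently to pieces of a large dataset and then merging the partial results. A minimal single-machine sketch, assuming OpenMP as the parallelization mechanism and an arbitrary byte-counting workload (neither is prescribed by the entry):

```c
/* Data-parallel sketch: every thread scans its own chunk of a large array
 * and the per-thread counts are merged by the reduction clause.
 * OpenMP is an assumed example mechanism; build with -fopenmp. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 100u * 1000u * 1000u;        /* stand-in for a "large volume" */
    unsigned char *data = malloc(n);
    if (!data) return 1;
    for (size_t i = 0; i < n; i++)
        data[i] = (unsigned char)(i & 0xFF);

    long long matches = 0;
    #pragma omp parallel for reduction(+:matches)
    for (size_t i = 0; i < n; i++)          /* same operation on every element */
        if (data[i] == 0x7F)
            matches++;

    printf("matches = %lld\n", matches);
    free(data);
    return 0;
}
```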
Fabric computing — or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a weave or a fabric when viewed collectively from a distance.[1] Usually this refers to a consolidated high performance computing… … Wikipedia
Benchmark (computing) — In computing, a benchmark is the act of running a computer program, a set of programs, or other operations in order to assess the relative performance of an… … Wikipedia
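As a small illustration of "running a program to assess relative performance", the sketch below times several repetitions of an operation with a wall-clock timer and reports the fastest run; the summation workload, the repetition count, and the best-of-five reporting are assumptions made only for the example:

```c
/* Micro-benchmark sketch: repeat a unit of work several times, time each
 * run with a monotonic clock, and report the fastest run.
 * The floating-point summation is an arbitrary illustrative workload. */
#include <stdio.h>
#include <time.h>

static double now_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    volatile double sink = 0.0;  /* keeps the work from being optimized away */
    double best = 1e300;

    for (int run = 0; run < 5; run++) {
        double t0 = now_seconds();
        double s = 0.0;
        for (long i = 0; i < 10000000L; i++)
            s += (double)i * 0.5;
        sink = s;
        double t1 = now_seconds();
        if (t1 - t0 < best)
            best = t1 - t0;
    }

    printf("best of 5 runs: %.4f s (checksum %.1f)\n", best, sink);
    return 0;
}
```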
Locus Computing Corporation — a defunct private company founded in 1982 by Gerald J. Popek, based in Inglewood, California, USA; key people included Gerald J. Popek,… … Wikipedia